Search for: All records

Creators/Authors contains: "Ren, Kui"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Abstract The implicit boundary integral method (IBIM) provides a framework for constructing quadrature rules on regular lattices for integrals over irregular domain boundaries. This work provides a systematic error analysis of IBIMs on uniform Cartesian grids for boundaries with different degrees of regularity. First, it is shown that the quadrature error gains an additional order of $\frac{d-1}{2}$ from the curvature of a strongly convex smooth boundary, owing to the "randomness" in the signed distances; this gain is discounted for degenerate convex surfaces. The error estimate is then extended to general boundaries under certain special circumstances, including an analysis of how the quadrature error depends on the boundary's local geometry relative to the underlying grid. Bounds on the variance of the quadrature error under random shifts and rotations of the lattice are also derived.
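For intuition, here is a minimal sketch of an IBIM-style quadrature in Python. It is not the paper's implementation: it estimates the circumference of a circle by summing an averaging kernel of the signed distance over a uniform Cartesian grid, with the level-set Jacobian weight R/(R + d) that is specific to the circle. The grid spacing, tube half-width, and cosine kernel are illustrative choices.

```python
import numpy as np

# Hypothetical IBIM-style sketch: approximate the circumference of a circle
# of radius R using only values on a uniform Cartesian grid, via the signed
# distance function and an averaging kernel on a tube of half-width eps.

R, h, eps = 1.0, 0.01, 0.08            # radius, grid spacing, tube half-width
xs = np.arange(-1.5, 1.5, h)
X, Y = np.meshgrid(xs, xs)
r = np.hypot(X, Y)
d = r - R                               # signed distance to the circle (d > 0 outside)

def delta_eps(t):
    """Cosine kernel: smooth delta approximation supported on |t| < eps."""
    return np.where(np.abs(t) < eps, (1.0 + np.cos(np.pi * t / eps)) / (2.0 * eps), 0.0)

# Jacobian weight mapping each level set {d = t} back onto the boundary:
# the level set at distance t is a circle of length 2*pi*(R + t), so the
# correction factor is R / (R + d) = R / r for this geometry.
J = np.where(r > 0, R / np.maximum(r, 1e-12), 0.0)

# Quadrature: kernel * Jacobian summed over all grid nodes, times cell area h^2.
circumference = np.sum(delta_eps(d) * J) * h**2
print(circumference, 2 * np.pi * R)     # should agree to a few digits
```

Because the circle is strongly convex, the signed distances of nearby grid nodes are effectively decorrelated, which is the mechanism behind the extra error order discussed in the abstract.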
  2. Top-k frequent items detection is a fundamental task in data stream mining. Many promising solutions have been proposed to improve memory efficiency while maintaining high accuracy in detecting the top-k items. Beyond the memory-efficiency concern, however, users who participate in the task without proper protection risk privacy loss, since their contributed local data streams may continually leak sensitive individual information. Most existing works address either the memory-efficiency problem or the privacy concerns, but seldom both jointly, and thus cannot achieve a satisfactory tradeoff between memory efficiency, privacy protection, and detection accuracy. In this paper, we present a novel framework, HG-LDP, that achieves accurate top-k item detection at bounded memory expense while providing rigorous local differential privacy (LDP) protection. Specifically, we identify two key challenges naturally arising in the task, which reveal that directly applying existing LDP techniques leads to an inferior accuracy-privacy-memory efficiency tradeoff. We therefore instantiate three advanced schemes under the framework by designing novel LDP randomization methods that address the hurdles caused by the large item domain and the limited memory space. We conduct comprehensive experiments on both synthetic and real-world datasets to show that the proposed schemes achieve a superior accuracy-privacy-memory efficiency tradeoff, saving 2300× memory over baseline methods when the item domain size is 41,270. Our code is anonymously open-sourced via the link.
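As background for the LDP machinery this line of work builds on, the following is an illustrative sketch of generalized randomized response (k-RR), a textbook LDP frequency oracle; it is not the paper's HG-LDP schemes, and the domain, epsilon, and stream below are synthetic placeholders.

```python
import math
import random
from collections import Counter

# Illustrative sketch only: k-ary randomized response (k-RR), a basic LDP
# frequency oracle. HG-LDP's schemes are more involved; this shows the
# standard building block they improve upon.

def krr_perturb(item, domain, epsilon):
    """Report the true item with prob p, else a uniformly random other item."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return item
    return random.choice([v for v in domain if v != item])

def krr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates from the perturbed reports."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts[v] - n * q) / (p - q) for v in domain}

# Usage: a skewed synthetic stream; the top-3 are recovered from estimates.
domain = list(range(20))
stream = [random.choice([0, 0, 0, 1, 1, 2] + domain) for _ in range(50000)]
reports = [krr_perturb(x, domain, epsilon=2.0) for x in stream]
est = krr_estimate(reports, domain, epsilon=2.0)
print(sorted(est, key=est.get, reverse=True)[:3])
```

Note how the estimator's noise grows with the domain size k; this is exactly the large-domain hurdle the abstract mentions, which naive k-RR handles poorly at domain sizes like 41,270.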
  3. Physically unclonable hardware fingerprints can be used for device authentication. The photo-response non-uniformity (PRNU) is the most reliable hardware fingerprint of digital cameras and can be conveniently extracted from images. However, we find that image post-processing software may introduce extra noise into images. Part of this noise remains in the extracted PRNU fingerprints and is hard to eliminate with traditional approaches such as denoising filters. We define this noise as software noise, which pollutes PRNU fingerprints and interferes with authenticating a camera-equipped device. In this paper, we propose novel approaches for fingerprint matching, a critical step in device authentication, in the presence of software noise. We calculate the cross-correlation between PRNU fingerprints of different cameras using a test statistic such as the peak-to-correlation energy (PCE) to estimate the software noise correlation. During fingerprint matching, we take the ratio of the test statistic on the two PRNU fingerprints of interest to the estimated software noise correlation. We denote this ratio as the fingerprint-to-software-noise ratio (FITS), which allows us to detect the PRNU hardware noise correlation component in the test statistic during fingerprint matching. Extensive experiments over 10,000 images taken by more than 90 smartphones validate our approaches, which significantly outperform state-of-the-art approaches on polluted fingerprints. We are the first to study fingerprint matching in the presence of software noise.
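For context, here is a minimal sketch of the peak-to-correlation energy (PCE) test statistic referenced above. The fingerprints are synthetic, the exclusion window is an illustrative choice, and the sketch omits the paper's software-noise estimation and the FITS ratio itself.

```python
import numpy as np

# Illustrative sketch: PCE of the circular cross-correlation between two
# fingerprint arrays -- the standard building block for PRNU matching,
# not the paper's FITS computation.

def pce(f1, f2, exclude=5):
    """Peak-to-correlation energy between fingerprints f1 and f2."""
    f1 = (f1 - f1.mean()).ravel()
    f2 = (f2 - f2.mean()).ravel()
    # Circular cross-correlation via FFT.
    xcorr = np.fft.ifft(np.fft.fft(f1) * np.conj(np.fft.fft(f2))).real
    peak_idx = np.argmax(np.abs(xcorr))
    peak = xcorr[peak_idx]
    # Energy of the correlation surface, excluding a window around the peak.
    mask = np.ones_like(xcorr, dtype=bool)
    mask[max(peak_idx - exclude, 0):peak_idx + exclude + 1] = False
    energy = np.mean(xcorr[mask] ** 2)
    return peak**2 / energy

rng = np.random.default_rng(0)
prnu = rng.normal(size=(64, 64))            # shared "hardware" noise component
a = prnu + 0.5 * rng.normal(size=(64, 64))  # fingerprint estimate, camera A
b = prnu + 0.5 * rng.normal(size=(64, 64))  # second estimate, same camera
c = rng.normal(size=(64, 64))               # unrelated camera
print(pce(a, b), pce(a, c))                 # matched pair scores far higher
```

The abstract's point is that software noise inflates the correlation between fingerprints from *different* cameras; dividing the statistic by an estimate of that inflation is what restores a usable decision margin.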
  4. The increasing demand for data-driven machine learning (ML) models has led to the emergence of model markets, where a broker collects personal data from data owners to produce high-usability ML models. To incentivize data owners to share their data, the broker needs to price the data appropriately while protecting the owners' privacy. For equitable data valuation, which is crucial in data pricing, the Shapley value has become the most prevalent technique because it satisfies all four desirable fairness properties: balance, symmetry, zero element, and additivity. For the right to be forgotten, which is stipulated by many data privacy protection laws to allow data owners to unlearn their data from trained models, the sharded structure in ML model training has become a de facto standard, reducing the cost of future unlearning by avoiding retraining the entire model from scratch. In this paper, we explore how the sharded structure adopted for the right to be forgotten affects the Shapley value used for equitable data valuation in model markets. To adapt the Shapley value to the sharded structure, we propose the S-Shapley value, a sharded-structure-based Shapley value that satisfies the four desirable properties for data valuation. Since we prove that computing the S-Shapley value is #P-complete, we develop two sampling-based methods to approximate it. Furthermore, to efficiently update valuation results after data owners unlearn their data, we present two delta-based algorithms that estimate the change in data value rather than recomputing the value itself. Experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
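To illustrate the general idea of sampling-based Shapley approximation that the paper builds on (this is not S-Shapley, nor its delta-based update algorithms), here is a minimal Monte Carlo permutation-sampling sketch with a synthetic utility function standing in for model accuracy.

```python
import random

# Illustrative sketch: Monte Carlo permutation sampling for ordinary Shapley
# values in data valuation. `utility` is a stand-in set function; in a model
# market it would be the accuracy of a model trained on a coalition's data.

def shapley_mc(players, utility, n_samples=2000, seed=0):
    """Approximate each player's Shapley value by averaging marginal
    contributions over uniformly random permutations."""
    rng = random.Random(seed)
    values = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition, prev = set(), utility(set())
        for p in perm:
            coalition.add(p)
            cur = utility(coalition)
            values[p] += cur - prev   # marginal contribution of p
            prev = cur
    return {p: v / n_samples for p, v in values.items()}

# Toy utility: diminishing returns in coalition weight, with owner 0's data
# worth double -- purely synthetic, for demonstration only.
weights = {0: 2.0, 1: 1.0, 2: 1.0, 3: 1.0}
def utility(S):
    return sum(weights[p] for p in S) ** 0.5

print(shapley_mc(list(weights), utility))
```

Sampling is the standard workaround for exact Shapley computation being intractable; the abstract's #P-completeness result plays the analogous role for the sharded setting, motivating its two approximation methods.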